
    Memory-aware Scheduling for Energy Efficiency on Multicore Processors

    Memory bandwidth is a scarce resource in multicore systems. Scheduling has a dramatic impact on the delay introduced by memory contention, but also on the effectiveness of frequency scaling at saving energy. This paper investigates the cross-effects between tasks running on a multicore system, considering memory contention and the technical constraint of chip-wide frequency and voltage settings. We make the following contributions: 1) We identify the memory characteristics of tasks and sort core-specific runqueues to allow co-scheduling of tasks with a minimal energy-delay product. 2) According to the memory characteristics of the workload, we set the frequency of individual chips so that the resulting delay is only marginal. Our evaluation with a Linux implementation running on an Intel quad-core processor shows that memory-aware scheduling can reduce the energy-delay product (EDP) considerably.
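
    As a rough illustration of the idea, the sketch below sorts a runqueue by a memory-intensity metric, pairs the most and least memory-bound tasks for co-scheduling, and derives a chip frequency from that pairing. The metric (last-level-cache misses per thousand instructions), the thresholds, and the frequencies are illustrative assumptions, not values from the paper.

        /* Illustrative sketch: co-scheduling by memory intensity and frequency choice.
         * Task names, the miss-rate metric and the thresholds are assumptions. */
        #include <stdio.h>
        #include <stdlib.h>

        struct task {
            const char *name;
            double llc_misses_per_kinst;  /* memory intensity sampled via performance counters */
        };

        /* Sort descending by memory intensity so memory-bound and CPU-bound tasks
         * end up at opposite ends of the runqueue and can be co-scheduled. */
        static int by_intensity(const void *a, const void *b)
        {
            const struct task *x = a, *y = b;
            return (y->llc_misses_per_kinst > x->llc_misses_per_kinst) -
                   (y->llc_misses_per_kinst < x->llc_misses_per_kinst);
        }

        /* Pick a chip-wide frequency: if the co-scheduled set is memory bound,
         * a lower frequency adds little delay but saves energy. */
        static int pick_freq_mhz(double avg_intensity)
        {
            if (avg_intensity > 20.0) return 1600;  /* memory bound: scale down */
            if (avg_intensity > 5.0)  return 2133;
            return 2666;                            /* CPU bound: run at full speed */
        }

        int main(void)
        {
            struct task rq[] = {
                { "stream-like", 35.0 }, { "compress", 2.1 },
                { "db-scan",     18.0 }, { "crypto",   0.4 },
            };
            size_t n = sizeof rq / sizeof rq[0];

            qsort(rq, n, sizeof rq[0], by_intensity);

            /* Co-schedule the most and the least memory-bound task on the two
             * cores of one chip, then set the frequency for that pairing. */
            double avg = (rq[0].llc_misses_per_kinst + rq[n - 1].llc_misses_per_kinst) / 2.0;
            printf("co-schedule %s with %s at %d MHz\n",
                   rq[0].name, rq[n - 1].name, pick_freq_mhz(avg));
            return 0;
        }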

    Reducing Response Time with Preheated Caches

    CPU performance is increasingly limited by thermal dissipation, and aggressive power management will soon be beneficial for performance. In particular, temporarily idle parts of the chip (including the caches) should be power-gated to reduce leakage power. Current CPUs already lose their cache state whenever the CPU is idle for extended periods of time, which causes a performance loss when execution is resumed, due to the high number of cache misses while the working set is fetched from external memory. In a server system, the first network request after such an idle period suffers from increased response time. We present a technique to reduce this overhead by preheating the caches before the network request arrives at the server: our design predicts the working set of the server application by analyzing the cache contents after similar requests have been processed. As soon as an estimate of the working set is available, a predictable network architecture starts to announce future incoming network packets to the server, which then loads the predicted working set into the cache. Our experiments show that, if this preheating step is complete when the network packet arrives, the response-time overhead is reduced by an average of 80%.
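
    The sketch below illustrates only the preheating step, assuming the predicted working set is simply a list of cache-line addresses and that the announcement of an incoming packet has already been received. The prediction and the network side are stubbed out; __builtin_prefetch is a GCC/Clang intrinsic used for illustration, not the paper's mechanism.

        /* Illustrative sketch of the preheating step: once an incoming request is
         * announced, touch the predicted working set so it is cache-resident before
         * the packet arrives. Names are assumptions, not the paper's interfaces. */
        #include <stddef.h>
        #include <stdio.h>

        #define WSET_MAX 1024

        struct working_set {
            void  *line[WSET_MAX];  /* cache-line addresses recorded after a similar request */
            size_t n;
        };

        /* Called when the predictable network announces a future packet. */
        static void preheat(const struct working_set *ws)
        {
            for (size_t i = 0; i < ws->n; i++)
                __builtin_prefetch(ws->line[i], 0 /* read */, 3 /* keep in all levels */);
        }

        int main(void)
        {
            static char app_state[64 * 1024];  /* stand-in for server application data */
            struct working_set ws = { .n = 0 };

            /* Pretend the analysis of a previous request recorded these lines. */
            for (size_t off = 0; off < sizeof app_state && ws.n < WSET_MAX; off += 64)
                ws.line[ws.n++] = &app_state[off];

            preheat(&ws);  /* runs while the packet is still in flight */
            printf("preheated %zu cache lines\n", ws.n);
            return 0;
        }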

    When Physical is Not Real Enough

    This position paper argues that policies for physical memory management and for memory power-mode control should be relocated to the system software of a programmable memory management controller (MMC). Similar to the mapping of virtual to physical addresses performed by a processor's MMU, this controller offers another level of mapping, from physical addresses to real addresses in a multi-bank, multi-technology (DRAM, MRAM, flash) memory system. Furthermore, the programmable memory controller is responsible for the allocation and migration of memory according to power and performance demands. Our approach dissociates the aspects of memory protection and sharing from the aspect of energy-aware management of real memory. In this way, legacy operating systems do not have to be extended to reduce memory power dissipation, and power-aware memory is no longer limited to CPUs with an MMU.
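
    A minimal sketch of such a second-level mapping is shown below: a page-granular table translates physical addresses into real addresses in banks of different technologies, and a migration simply rewrites an entry (the data copy is omitted). The table layout, bank bases, and function names are assumptions for illustration, not the controller's actual interface.

        /* Illustrative sketch: physical addresses (as seen by the CPU/MMU) are
         * translated to real addresses in banks of different technologies, and a
         * page can be remapped to a lower-power bank without the OS noticing. */
        #include <stdint.h>
        #include <stdio.h>

        enum bank_kind { BANK_DRAM, BANK_MRAM, BANK_FLASH };

        struct bank { enum bank_kind kind; uint64_t base; };

        #define PAGE_SHIFT 12
        #define NPAGES     16

        /* One entry per physical page: which bank and which frame inside it. */
        static struct { int bank; uint64_t frame; } map[NPAGES];

        static const struct bank banks[] = {
            { BANK_DRAM, 0x00000000 }, { BANK_MRAM, 0x40000000 },
        };

        static uint64_t translate(uint64_t phys)
        {
            uint64_t page = phys >> PAGE_SHIFT, off = phys & ((1u << PAGE_SHIFT) - 1);
            return banks[map[page].bank].base + (map[page].frame << PAGE_SHIFT) + off;
        }

        /* Migrate a physical page to another bank, e.g. to power down a DRAM rank.
         * Only the mapping update is shown; the data copy is omitted. */
        static void migrate(uint64_t page, int dst_bank, uint64_t dst_frame)
        {
            map[page].bank = dst_bank;
            map[page].frame = dst_frame;
        }

        int main(void)
        {
            for (uint64_t p = 0; p < NPAGES; p++) { map[p].bank = 0; map[p].frame = p; }

            printf("before: phys 0x2100 -> real 0x%llx\n", (unsigned long long)translate(0x2100));
            migrate(0x2, 1, 0);  /* move page 2 into the MRAM bank */
            printf("after:  phys 0x2100 -> real 0x%llx\n", (unsigned long long)translate(0x2100));
            return 0;
        }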

    Application Characterization for Wireless Network Power Management

    The popular IEEE 802.11 standard defines a power saving mode that keeps the network interface in a low-power sleep state and periodically powers it up to synchronize with the base station. The length of the sleep interval, the so-called beacon period, affects two dimensions, namely application performance and energy consumption. The main disadvantage of this power saving policy lies in its static nature: a short beacon period wastes energy due to frequent activations of the interface, while a long beacon period can cause diminished application responsiveness and performance. While the first aspect, the reduction of power consumption, has been studied extensively, the implications on application performance have received only little attention. We argue that the tolerable reduction of performance or quality depends on the application and the user. As an example, a beacon period of only 100 ms slows down RPC-based operations like NFS dramatically, while the user will probably not notice the additional delay when using a web browser. Known power management algorithms, if they guarantee anything at all, enforce a system-wide limit on performance degradation without differentiating between application profiles. This work presents an approach to identify on-line the currently running application class by mapping network traffic characteristics to a predefined set of application profiles. We propose a power saving policy that dynamically adapts to the detected application profile, thus identifying the application- and user-specific power/performance trade-off. An implementation of the characterization algorithm is presented and evaluated running several typical applications for mobile devices.
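
    The sketch below shows the general shape of such a characterization: a few traffic statistics are mapped to an application profile, and the profile selects a beacon period. The features, classes, thresholds, and periods are invented for illustration and are not the paper's classifier.

        /* Illustrative sketch: map simple traffic statistics to an application
         * profile and pick a beacon period accordingly. All constants are
         * assumptions chosen for illustration. */
        #include <stdio.h>

        struct traffic_stats {
            double mean_interarrival_ms;  /* observed over a sliding window */
            double mean_packet_bytes;
        };

        enum profile { PROFILE_RPC, PROFILE_INTERACTIVE, PROFILE_BULK };

        static enum profile classify(const struct traffic_stats *s)
        {
            if (s->mean_interarrival_ms < 20.0 && s->mean_packet_bytes < 512.0)
                return PROFILE_RPC;          /* chatty request/response, e.g. NFS */
            if (s->mean_interarrival_ms < 500.0)
                return PROFILE_INTERACTIVE;  /* e.g. web browsing */
            return PROFILE_BULK;
        }

        /* Short beacon period for latency-sensitive traffic, long one otherwise. */
        static int beacon_period_ms(enum profile p)
        {
            switch (p) {
            case PROFILE_RPC:         return 20;
            case PROFILE_INTERACTIVE: return 100;
            default:                  return 500;
            }
        }

        int main(void)
        {
            struct traffic_stats nfs_like = { 5.0, 200.0 };
            struct traffic_stats web_like = { 300.0, 900.0 };

            printf("nfs-like -> %d ms beacon period\n", beacon_period_ms(classify(&nfs_like)));
            printf("web-like -> %d ms beacon period\n", beacon_period_ms(classify(&web_like)));
            return 0;
        }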

    Energy Management for Hypervisor-Based Virtual Machines

    Current approaches to power management are based on operating systems with full knowledge of and full control over the underlying hardware; the distributed nature of multi-layered virtual machine environments renders such approaches insufficient. In this paper, we present a novel framework for energy management in modular, multi-layered operating system structures. The framework provides a unified model to partition and distribute energy, and mechanisms for energy-aware resource accounting and allocation. As a key property, the framework explicitly takes recursive energy consumption into account, i.e., energy spent in the virtualization layer or in subsequent driver components. Our prototypical implementation targets hypervisor-based virtual machine systems and comprises two components: a host-level subsystem, which controls machine-wide energy constraints and enforces them among all guest OSes and service components, and, complementary to it, an energy-aware guest operating system capable of fine-grained, application-specific energy management. Guest-level energy management thereby relies on effective virtualization of physical energy effects provided by the virtual machine monitor. Experiments with CPU and disk devices and an external data acquisition system demonstrate that our framework accurately controls and stipulates the power consumption of individual hardware devices, both for energy-aware and energy-unaware guest operating systems.
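
    As a rough illustration of recursive energy accounting, the sketch below charges a guest for its direct device energy plus the energy spent on its behalf in the virtualization layer, and throttles it once a budget is exceeded. The structure names, numbers, and the throttling rule are assumptions, not the framework's interfaces.

        /* Illustrative sketch: a guest is charged for the energy its virtual
         * devices consume plus the energy the hypervisor/driver layers spend on
         * its behalf; a host-level policy enforces a per-guest budget. */
        #include <stdio.h>

        struct guest_account {
            const char *name;
            double device_mj;     /* energy attributed to direct device use (mJ) */
            double recursive_mj;  /* energy spent in virtualization layers on behalf of the guest */
            double budget_mj;     /* allotment for the current accounting period */
        };

        /* Charge one virtualized request: device energy plus the hypervisor share. */
        static void charge(struct guest_account *g, double device_mj, double hypervisor_mj)
        {
            g->device_mj    += device_mj;
            g->recursive_mj += hypervisor_mj;
        }

        /* Host-level policy: throttle a guest once its total charge exceeds its budget. */
        static int must_throttle(const struct guest_account *g)
        {
            return g->device_mj + g->recursive_mj > g->budget_mj;
        }

        int main(void)
        {
            struct guest_account linux_guest = { "linux-guest", 0.0, 0.0, 100.0 };

            for (int req = 0; req < 40; req++)
                charge(&linux_guest, 2.0, 0.7);  /* e.g. one virtual disk request */

            printf("%s: %.1f mJ charged, throttle=%d\n", linux_guest.name,
                   linux_guest.device_mj + linux_guest.recursive_mj,
                   must_throttle(&linux_guest));
            return 0;
        }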

    Energy Accounting Support in TinyOS

    Energy is the most limiting resource in sensor networks. This is particularly true for dynamic sensor networks in which the sensor-net application is not statically planned. We describe three components of our energy management system for nodes in such dynamic sensor networks: a flexible energy model, an accounting infrastructure for making sensor nodes energy-aware, and Resource Containers for managing the energy accounting information.
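
    The sketch below illustrates the accounting idea only: an energy model maps device states to power draw, and a resource container accumulates the energy of activities performed on a task's behalf. The constants and names are assumptions for illustration, not the TinyOS implementation.

        /* Illustrative sketch: a simple energy model plus a resource container
         * that is charged for device activity performed on behalf of a task. */
        #include <stdio.h>

        enum device { DEV_CPU, DEV_RADIO_RX, DEV_RADIO_TX, DEV_SENSOR, DEV_COUNT };

        /* Energy model: average power per device state in milliwatts (assumed values). */
        static const double power_mw[DEV_COUNT] = { 5.4, 21.0, 24.0, 1.2 };

        struct resource_container {
            const char *task;
            double energy_mj;
        };

        /* Charge the container for running a device for the given time. */
        static void account(struct resource_container *rc, enum device d, double ms)
        {
            rc->energy_mj += power_mw[d] * ms / 1000.0;
        }

        int main(void)
        {
            struct resource_container sampling = { "sample-and-send", 0.0 };

            account(&sampling, DEV_SENSOR,   50.0);  /* take a reading */
            account(&sampling, DEV_CPU,      10.0);  /* process it */
            account(&sampling, DEV_RADIO_TX,  8.0);  /* transmit the result */

            printf("%s consumed %.3f mJ\n", sampling.task, sampling.energy_mj);
            return 0;
        }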